
    A multi-granularity locally optimal prototype-based approach for classification

    Prototype-based approaches generally provide better explainability and are widely used for classification. However, the majority of them suffer from system obesity and lack transparency on complex problems. In this paper, a novel classification approach with a multi-layered system structure self-organized from data is proposed. This approach is able to identify local peaks of the multi-modal density derived from static data and select the more representative ones at multiple levels of granularity to act as prototypes. These prototypes are then optimized to their locally optimal positions in the data space and arranged in layers with meaningful dense links in between, forming pyramidal hierarchies according to the respective levels of granularity. After being primed offline, the constructed classification model is capable of self-developing continuously from streaming data to self-expand its knowledge base. The proposed approach offers higher transparency and is convenient for visualization thanks to its hierarchical nested architecture. Its system identification process is objective, data-driven and free from prior assumptions about the data generation model and from user- and problem-specific parameters. Its decision-making process follows the “nearest prototype” principle and is highly explainable and traceable. Numerical examples on a wide range of benchmark problems demonstrate its high performance.
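
    As an illustration of the “nearest prototype” decision rule described above, the sketch below shows a minimal nearest-prototype classifier in Python. The prototype set, labels and Euclidean metric are illustrative assumptions only; the paper's multi-granularity prototype identification and local optimization steps are not reproduced here.

        import numpy as np

        def nearest_prototype_predict(X, prototypes, prototype_labels):
            """Assign each sample the label of its nearest prototype (Euclidean distance).

            Generic sketch of the "nearest prototype" principle only; the paper's
            multi-granularity prototype identification is not reproduced here.
            """
            X = np.atleast_2d(X)
            # Pairwise distances between samples and prototypes: shape (n_samples, n_prototypes)
            dists = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
            return prototype_labels[np.argmin(dists, axis=1)]

        # Hypothetical prototypes identified at some level of granularity, two per class
        prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [5.0, 5.0]])
        prototype_labels = np.array([0, 0, 1, 1])
        print(nearest_prototype_predict(np.array([[0.4, 0.2], [4.6, 4.9]]),
                                        prototypes, prototype_labels))  # -> [0 1]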

    Autonomous Learning Multi-Model Classifier of 0-Order (ALMMo-0)

    In this paper, a new type of 0-order multi-model classifier, called Autonomous Learning Multiple-Model (ALMMo-0), is proposed. The proposed classifier is non-iterative, feedforward and entirely data-driven. It automatically extracts data clouds from the data of each class and forms a 0-order AnYa type fuzzy rule-based (FRB) sub-classifier for each class. New data are classified using the “winner takes all” strategy according to the confidence scores produced by the sub-classifiers, which are generated objectively from the mutual distribution and ensemble properties of the data. Numerical examples based on benchmark datasets demonstrate the high performance and computational efficiency of the proposed classifier.
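
    A minimal sketch of the “winner takes all” decision step is given below. It assumes each per-class sub-classifier reports a confidence score computed from the distances between a new sample and that class's data clouds (represented here only by their centres); the exponential-kernel score is a simplification, not the exact ALMMo-0 formulation.

        import numpy as np

        def class_confidence(x, cloud_centres):
            """Simplified confidence of one per-class sub-classifier: the strongest
            response of an exponential kernel over the class's data-cloud centres
            (a stand-in for the ALMMo-0 score, not the paper's exact formula).
            """
            d2 = np.sum((cloud_centres - x) ** 2, axis=1)
            return np.max(np.exp(-d2))

        def winner_takes_all_predict(x, clouds_per_class):
            """'Winner takes all': the class whose sub-classifier gives the highest score."""
            scores = {label: class_confidence(x, centres)
                      for label, centres in clouds_per_class.items()}
            return max(scores, key=scores.get)

        # Hypothetical data-cloud centres extracted per class
        clouds_per_class = {
            "A": np.array([[0.0, 0.0], [0.5, 0.5]]),
            "B": np.array([[3.0, 3.0], [3.5, 2.5]]),
        }
        print(winner_takes_all_predict(np.array([3.2, 2.8]), clouds_per_class))  # -> "B"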

    Local modes-based free-shape data partitioning

    In this paper, a new data partitioning algorithm, named “local modes-based data partitioning”, is proposed. This algorithm is entirely data-driven and free from any user input and prior assumptions. It automatically derives the modes of the empirically observed density of the data samples and forms parameter-free data clouds; the partitions formed around the identified focal points resemble Voronoi tessellations. The proposed algorithm has two versions, namely offline and evolving. Both versions are able to work separately and start “from scratch”, and they can also be combined in a hybrid mode. Numerical experiments demonstrate the validity of the proposed algorithm as a fully autonomous partitioning technique that achieves better performance than alternative algorithms.
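
    The Voronoi-like partitioning mentioned above can be illustrated with the short sketch below, which simply assigns each sample to its nearest focal point. The focal points are hard-coded here for illustration, whereas the proposed algorithm derives them from the local modes of the empirical density.

        import numpy as np

        def partition_by_focal_points(X, focal_points):
            """Assign each sample to the data cloud of its nearest focal point,
            producing Voronoi-tessellation-like partitions (illustrative sketch only;
            the mode-identification step of the paper is not reproduced here).
            """
            dists = np.linalg.norm(X[:, None, :] - focal_points[None, :, :], axis=2)
            return np.argmin(dists, axis=1)  # index of the data cloud for each sample

        X = np.array([[0.1, 0.2], [0.2, 0.1], [2.9, 3.1], [3.2, 3.0], [6.0, 5.8]])
        focal_points = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 6.0]])  # hypothetical local modes
        print(partition_by_focal_points(X, focal_points))  # -> [0 0 1 1 2]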

    MICE: Multi-layer multi-model images classifier ensemble

    In this paper, a new type of fast deep learning (DL) network for handwriting recognition is proposed. In contrast to existing DL networks, the proposed approach has a clearly interpretable structure that is entirely data-driven and free from user- or problem-specific assumptions. It is entirely parallelizable and very efficient. First, the same fundamental image transformation techniques (rotation and scaling) that are used by other existing DL methods are applied to improve generalization. Commonly used descriptors are then used to extract global features from the training set, and based on them a bank/ensemble of zero-order AnYa type fuzzy rule-based (FRB) models is built through the recently introduced Autonomous Learning Multiple Model (ALMMo) method working in parallel. The final decision about the winning class label is made by a committee on the basis of the fuzzy mixture of the trained ALMMo-0 models (where “0” stands for zero order, meaning that the consequent represents a class label, a singleton, rather than a regression model as in the first order). The training of the proposed MICE system is very efficient and highly parallelizable. It significantly outperforms the best known methods in terms of time and is on par in terms of precision/accuracy. Critically, it offers a high level of interpretability and transparency of the classification model, and full repeatability of the results (unlike methods that use probabilistic elements). Moreover, it allows an evolving scenario whereby the data are provided in an incremental, online manner and the system structure is developed in parallel with the classification, which opens opportunities for online and real-time (sample-by-sample) applications. Numerical examples from the well-known handwritten digit recognition problem (MNIST) were used, and the results demonstrate highly repeatable performance after a very short training process, in addition to the high level of interpretability and transparency.
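
    The committee decision over the bank of ALMMo-0 models can be sketched as below: each model (one per global descriptor) is assumed to return a vector of per-class scores, and the committee fuses them by a simple weighted (fuzzy) mixture before taking the winning label. Both the score vectors and the weights here are hypothetical placeholders, and the exact fuzzy mixture used by MICE may differ.

        import numpy as np

        def committee_decision(per_model_scores, model_weights=None):
            """Fuse per-class score vectors from an ensemble of models by a weighted
            mixture and return the winning class index (illustrative sketch only).
            """
            per_model_scores = np.asarray(per_model_scores)         # shape (n_models, n_classes)
            if model_weights is None:
                model_weights = np.ones(per_model_scores.shape[0])   # equal weighting by default
            model_weights = np.asarray(model_weights, dtype=float)
            model_weights /= model_weights.sum()
            mixed = model_weights @ per_model_scores                 # fused per-class scores
            return int(np.argmax(mixed))

        # Hypothetical scores from three descriptor-specific ALMMo-0 models over 10 digit classes
        scores = np.random.RandomState(0).rand(3, 10)
        print(committee_decision(scores))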

    Self-Organizing Fuzzy Belief Inference System for Classification

    Evolving fuzzy systems (EFSs) are widely known as a powerful tool for streaming data prediction. In this paper, a novel zero-order EFS with a unique belief structure is proposed for data stream classification. Thanks to this new belief structure, the proposed model can handle inter-class overlaps in a natural way and better capture the underlying multi-model structure of data streams in the form of prototypes. Utilizing data-driven soft thresholds, the proposed model self-organizes a set of prototype-based IF-THEN fuzzy belief rules from data streams for classification, and its learning outcomes are practically meaningful. With no requirement for prior knowledge of the problem domain, the proposed model is capable of self-determining the appropriate level of granularity for rule base construction, while enabling users to specify their preferences on the degree of fineness of its knowledge base. Numerical examples demonstrate the superior performance of the proposed model on a wide range of stationary and nonstationary classification benchmark problems.
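
    A heavily simplified sketch of prototype-based zero-order rule firing is shown below: each rule is represented by a prototype and a class label, firing strengths are computed from distances, and the per-class evidence is accumulated. The belief structure and data-driven soft thresholds of the proposed model are not reproduced; this is only a generic zero-order EFS-style illustration.

        import numpy as np

        def fire_rules(x, prototypes, rule_labels, n_classes):
            """Generic zero-order rule firing (not the paper's belief formulation):
            each prototype-based rule fires with an exponential kernel of its distance
            to the sample, and firing strengths are accumulated per class label.
            """
            strengths = np.exp(-np.sum((prototypes - x) ** 2, axis=1))
            evidence = np.zeros(n_classes)
            for s, label in zip(strengths, rule_labels):
                evidence[label] += s
            return evidence / evidence.sum()   # normalized per-class support

        # Hypothetical prototypes and their class labels (rule consequents)
        prototypes = np.array([[0.0, 0.0], [1.0, 0.5], [4.0, 4.0]])
        rule_labels = np.array([0, 0, 1])
        print(fire_rules(np.array([0.8, 0.4]), prototypes, rule_labels, n_classes=2))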

    Self-Organizing Fuzzy Inference Ensemble System for Big Streaming Data Classification

    An evolving intelligent system (EIS) is able to self-update its system structure and meta-parameters from streaming data. However, since the majority of EISs are implemented on a single-model architecture, their performance on large-scale, complex data streams is often limited. To address this deficiency, a novel self-organizing fuzzy inference ensemble framework is proposed in this paper. As the base learner of the proposed ensemble system, the self-organizing fuzzy inference system is capable of self-learning a highly transparent predictive model from streaming data on a chunk-by-chunk basis through a human-interpretable process. Very importantly, the base learner can continuously self-adjust its decision boundaries based on the inter-class and intra-class distances between prototypes identified from successive data chunks for higher classification precision. Thanks to its parallel distributed computing architecture, the proposed ensemble framework can achieve high classification precision while maintaining high computational efficiency on large-scale problems. Numerical examples based on popular benchmark big data problems demonstrate the superior performance of the proposed approach over state-of-the-art alternatives in terms of both classification accuracy and computational efficiency.
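
    The chunk-by-chunk learning scheme can be illustrated with the sketch below, where a stream is cut into fixed-size chunks and each chunk is dispatched in parallel to the members of an ensemble. The base learner here is a trivial placeholder standing in for the self-organizing fuzzy inference system; the prototype identification and decision-boundary adjustment of the paper are not reproduced.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        class PlaceholderBaseLearner:
            """Trivial stand-in for the self-organizing fuzzy inference base learner:
            it only keeps a simple per-class running average of chunk means."""
            def __init__(self):
                self.class_means = {}

            def partial_fit(self, X_chunk, y_chunk):
                for label in np.unique(y_chunk):
                    pts = X_chunk[y_chunk == label]
                    prev = self.class_means.get(label)
                    chunk_mean = pts.mean(axis=0)
                    self.class_means[label] = chunk_mean if prev is None else 0.5 * (prev + chunk_mean)

        def process_stream_in_chunks(X, y, learners, chunk_size=100):
            """Feed the stream to every ensemble member chunk by chunk,
            updating the members in parallel (illustrative only)."""
            for start in range(0, len(X), chunk_size):
                Xc, yc = X[start:start + chunk_size], y[start:start + chunk_size]
                with ThreadPoolExecutor() as pool:
                    list(pool.map(lambda learner: learner.partial_fit(Xc, yc), learners))

        rng = np.random.RandomState(0)
        X, y = rng.rand(500, 3), rng.randint(0, 2, 500)
        ensemble = [PlaceholderBaseLearner() for _ in range(4)]
        process_stream_in_chunks(X, y, ensemble, chunk_size=100)
        print(ensemble[0].class_means)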

    Applications of Magnetic Microbubbles for Theranostics

    Compared with other diagnostic methods, ultrasound has proven to be a safe, simple, non-invasive and cost-effective imaging technique, but its resolution is not comparable to that of magnetic resonance imaging (MRI). Contrast-enhanced ultrasound employing microbubbles can achieve better resolution and is now widely used to diagnose a number of diseases in the clinic. Over the last decade, microbubbles have been widely used as ultrasound contrast agents, drug delivery systems and nucleic acid transfection tools. However, microbubbles are not sufficiently stable under some conditions and are not well distributed in the circulatory system after administration. On the other hand, magnetic nanoparticles, as MRI contrast agents, can non-specifically penetrate into normal tissues because of their relatively small sizes. By taking advantage of these two kinds of agents, magnetic microbubbles, which couple magnetic iron oxide nanoparticles to the microbubble structure, have been explored. The stability of microbubbles can be improved by encapsulating magnetic nanoparticles into the bubble shells, and with the guidance of a magnetic field, magnetic microbubbles can be delivered to regions of interest; after appropriate ultrasound exposure, the nanoparticles can be released to the desired area as the magnetic microbubbles collapse. In this review, we summarize magnetic microbubbles used in diagnostic and therapeutic fields, and predict the potential applications of magnetic microbubbles in the future.

    Fast feedforward non-parametric deep learning network with automatic feature extraction

    In this paper, a new type of feedforward non-parametric deep learning network with automatic feature extraction is proposed. The proposed network is based on human-understandable local aggregations extracted directly from the images. There is no need for any feature selection or parameter tuning. The proposed network applies nonlinear transformation and segmentation operations to select the most distinctive features from the training images and builds RBF neurons based on them to perform classification with no weights to train. The design of the proposed network is very efficient (in terms of computation and time) and produces highly accurate classification results. Moreover, the training process is parallelizable, and the time consumption can be further reduced as more processors are involved. Numerical examples demonstrate the high performance and very short training process of the proposed network on different applications.
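
    A minimal sketch of classification with RBF neurons and no trained weights is shown below: each neuron is centred on a feature vector extracted from a training image (the feature extraction itself is omitted), and a test sample is assigned the label of the most strongly activated neuron. The Gaussian width is an arbitrary illustrative choice, not a value from the paper.

        import numpy as np

        def build_rbf_neurons(train_features, train_labels):
            """Each RBF neuron is simply centred on one training feature vector;
            no weights are trained (illustrative sketch of the weight-free design)."""
            return np.asarray(train_features), np.asarray(train_labels)

        def rbf_classify(x, centres, labels, sigma=1.0):
            """Label of the most strongly activated Gaussian RBF neuron."""
            activations = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * sigma ** 2))
            return labels[np.argmax(activations)]

        # Hypothetical feature vectors already extracted from training images
        centres, labels = build_rbf_neurons([[0.1, 0.9], [0.8, 0.2], [0.9, 0.1]], [0, 1, 1])
        print(rbf_classify(np.array([0.85, 0.15]), centres, labels))  # -> 1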

    Particle Swarm Optimized Autonomous Learning Fuzzy System

    The antecedent and consequent parts of a first-order evolving intelligent system (EIS) determine the validity of the learning results and overall system performance. Nonetheless, state-of-the-art techniques mostly stress novelty from the system identification point of view but pay less attention to the optimality of the learned parameters. Using the recently introduced autonomous learning multiple model (ALMMo) system as the implementation basis, this paper introduces a particle swarm-based approach for EIS optimization. The proposed approach is able to simultaneously optimize the antecedent and consequent parameters of ALMMo and effectively enhance system performance by iteratively searching for optimal solutions in the problem spaces. In addition, the proposed optimization approach does not adversely influence the “one pass” learning ability of ALMMo. Once the optimization process is complete, ALMMo can continue to learn from new data and incorporate unseen data patterns recursively without full retraining. Experimental studies with a number of real-world benchmark problems validate the proposed concept and general principles. It is also verified that the proposed optimization approach can be applied to other types of EISs with similar operating mechanisms.
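
    A minimal particle swarm optimization loop is sketched below to illustrate the idea of iteratively searching for better parameter values; the objective function, bounds and hyper-parameters are placeholders and do not correspond to the antecedent/consequent parameterization of ALMMo.

        import numpy as np

        def particle_swarm_optimize(objective, dim, n_particles=20, n_iters=100,
                                    bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=0):
            """Plain PSO minimizer (generic sketch, not the paper's exact scheme)."""
            rng = np.random.RandomState(seed)
            lo, hi = bounds
            pos = rng.uniform(lo, hi, size=(n_particles, dim))
            vel = np.zeros_like(pos)
            pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iters):
                r1, r2 = rng.rand(n_particles, dim), rng.rand(n_particles, dim)
                # Velocity update pulls each particle toward its personal and the global best
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([objective(p) for p in pos])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest_val.min()

        # Toy objective standing in for a (hypothetical) ALMMo validation error
        best, best_val = particle_swarm_optimize(lambda p: np.sum((p - 1.0) ** 2), dim=4)
        print(best, best_val)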